Summary:
The deployment of microgrids could be fostered by control systems that do not require complex modelling, calibration, forecasting or optimisation processes. This paper explores the application of Reinforcement Learning (RL) techniques to the operation of a microgrid. The implemented Deep Q-Network (DQN) can learn an optimal policy for operating the elements of an isolated microgrid, based on the agent-environment interaction that arises when particular operating actions are taken on the microgrid components. In order to facilitate the scaling-up of this solution, the algorithm relies exclusively on historical data from past events, and therefore it does not require forecasts of demand or renewable generation. The objective is to minimise the cost of operating the microgrid, including the penalty for non-served power. This paper analyses the effect of considering different definitions of the system state by expanding the set of variables that define it. The results are very satisfactory, as can be concluded from their comparison with the perfect-information optimal operation computed by a traditional optimisation model and with a naive model.
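As a rough illustration of the agent-environment loop the abstract describes, the sketch below trains a tabular Q-learning agent (a simplified stand-in for the paper's DQN) on a toy isolated microgrid with a battery and a penalty on non-served power. The environment, the demand and renewable profiles, and every numeric value are illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy isolated-microgrid environment (all values hypothetical).
T = 24
demand = np.array([2] * 8 + [4] * 8 + [3] * 8)      # kW per hour
renewable = np.array([0] * 6 + [5] * 12 + [0] * 6)  # solar-like profile
SOC_MAX = 4                                          # battery levels 0..SOC_MAX
ACTIONS = (-1, 0, +1)                                # discharge, idle, charge (kW)
PENALTY = 10.0                                       # cost per kW of non-served power

def step(t, soc, a):
    """Apply action a at hour t; return (next_soc, operating cost)."""
    a = max(-soc, min(SOC_MAX - soc, a))             # respect battery limits
    balance = renewable[t] - demand[t] - a           # power surplus after (dis)charge
    unserved = max(0.0, -balance)                    # demand left uncovered
    return soc + a, PENALTY * unserved

# Tabular Q-learning over (hour, state-of-charge, action), as a stand-in
# for the function-approximation Q-network used in the paper.
Q = np.zeros((T, SOC_MAX + 1, len(ACTIONS)))
alpha, gamma, eps = 0.2, 0.99, 0.1
for episode in range(2000):
    soc = 0
    for t in range(T):
        # Epsilon-greedy exploration of the operating actions.
        ai = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[t, soc].argmax())
        nxt, cost = step(t, soc, ACTIONS[ai])
        target = -cost + (gamma * Q[t + 1, nxt].max() if t + 1 < T else 0.0)
        Q[t, soc, ai] += alpha * (target - Q[t, soc, ai])
        soc = nxt

def rollout(policy):
    """Total one-day operating cost of a policy (hour, soc) -> action."""
    soc, total = 0, 0.0
    for t in range(T):
        soc, cost = step(t, soc, policy(t, soc))
        total += cost
    return total

greedy = lambda t, soc: ACTIONS[int(Q[t, soc].argmax())]  # learned policy
idle = lambda t, soc: 0                                   # never use the battery
print(rollout(greedy), rollout(idle))
```

The learned policy charges the battery during the renewable surplus and discharges it at night, so its daily cost is no worse than that of the battery-idle baseline; the paper's comparison against a perfect-information optimisation and a naive model follows the same spirit on a far richer system.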
Keywords: machine learning; microgrids; optimisation methods; power systems; reinforcement learning
JCR Impact Factor and WoS quartile: 3.004 - Q3 (2020); 3.000 - Q3 (2023)
DOI reference: https://doi.org/10.3390/en13112830
Published in print: June 2020.
Published online: June 2020.
Citation:
D. Domínguez-Barbero, J. García-González, M.A. Sanz-Bobi, E.F. Sánchez-Úbeda, Optimising a microgrid system by deep reinforcement learning techniques. Energies. Vol. 13, nº. 11, pp. 2830-1 - 2830-19, June 2020. [Online: June 2020]